Unpaired image-to-image translation aims to find a mapping between the source domain and the target domain. To alleviate the problem of lacking supervision from paired source images, cycle-consistency-based methods have been proposed to preserve image structure by assuming a reversible relationship between unpaired images. However, this assumption exploits only limited correspondence between image pairs. Recently, contrastive learning (CL) with patch-based positive/negative pairs has been applied to further explore image correspondence in unpaired image translation. Patch-based contrastive routines obtain positives via self-similarity computation and regard all remaining patches as negatives. This flexible learning paradigm acquires auxiliary contextual information at low cost. Since the number of negative samples is impressively large, out of curiosity we investigate the question: are all negatives necessary for contrastive learning? Unlike previous CL approaches, in this paper we study negatives from an information-theoretic perspective and introduce a novel negative Pruning technique for Unpaired image-to-image Translation (PUT) by sparsifying and ranking the patches. The proposed algorithm is efficient and flexible, and enables the model to stably learn essential information between corresponding patches. By putting quality over quantity, only a few negative patches are required to achieve better results. Finally, we validate the superiority, stability, and versatility of our model through comparative experiments.
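The abstract does not spell out the loss, but ranking and sparsifying negatives slots naturally into a PatchNCE-style objective. The following is a minimal PyTorch sketch of such a pruned patch-contrastive loss; the function name, the top-k hardness criterion, and the tensor shapes are illustrative assumptions rather than the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def pruned_patch_nce(query, positive, negatives, k=16, tau=0.07):
    """PatchNCE-style loss keeping only the top-k hardest negatives.

    query:     (B, D)     anchor patch features from the translated image
    positive:  (B, D)     corresponding patch features from the source image
    negatives: (B, N, D)  remaining patches treated as negatives (N >= k)
    """
    query = F.normalize(query, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    # Positive logit: similarity between corresponding patches.
    l_pos = (query * positive).sum(dim=-1, keepdim=True)           # (B, 1)

    # Negative logits, ranked by similarity; prune all but the top-k
    # hardest negatives (quality over quantity).
    l_neg = torch.bmm(negatives, query.unsqueeze(-1)).squeeze(-1)  # (B, N)
    l_neg, _ = l_neg.topk(k, dim=-1)                               # (B, k)

    # Index 0 is the positive; standard InfoNCE cross-entropy.
    logits = torch.cat([l_pos, l_neg], dim=-1) / tau
    labels = torch.zeros(query.size(0), dtype=torch.long, device=query.device)
    return F.cross_entropy(logits, labels)
```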
This study proposes a deep-learning-based tracking method for ultrasound (US) image-guided radiation therapy. The proposed cascade deep learning model consists of an attention network, a mask region-based convolutional neural network (Mask R-CNN), and a long short-term memory (LSTM) network. The attention network learns a mapping from a US image to a suspected region of landmark motion in order to reduce the search region. The Mask R-CNN then produces multiple region-of-interest (ROI) proposals in the reduced region and identifies the proposed landmark via three network heads: bounding box regression, proposal classification, and landmark segmentation. The LSTM network models the temporal relationship among continuous image frames for the bounding box regression and proposal classification. To consolidate the final proposal, a selection method is designed according to the similarities between sequential frames. The proposed method was tested on the liver US tracking dataset of the Medical Image Computing and Computer Assisted Interventions (MICCAI) 2015 challenge, where the landmarks were annotated by three experienced observers to obtain their mean positions. Over the 24 given sequences for which we have ground truths, the mean tracking error of all landmarks was 0.65 +/- 0.56 mm, and the errors of all landmarks were within 2 mm. We further tested the proposed model on 69 landmarks from the testing dataset, which has a similar image pattern to the training set, resulting in a mean tracking error of 0.94 +/- 0.83 mm. Our experimental results demonstrate the feasibility and accuracy of our proposed method in tracking liver anatomical landmarks using US images, providing a potential solution for active motion management during radiation therapy.
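The abstract only states that the final proposal is selected by similarity between sequential frames. One plausible instantiation, sketched below under that assumption, fuses each proposal's classification score with its IoU against the box tracked in the previous frame; the weighting scheme and function names are hypothetical.

```python
import numpy as np

def iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-8)

def select_proposal(proposals, scores, prev_box, alpha=0.5):
    """Pick the ROI proposal that balances its classification score
    against similarity (IoU) with the box tracked in the previous frame."""
    fused = [alpha * s + (1 - alpha) * iou(p, prev_box)
             for p, s in zip(proposals, scores)]
    return proposals[int(np.argmax(fused))]
```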
Pre-trained encoders are general-purpose feature extractors that can be used for many downstream tasks. Recent progress in self-supervised learning makes it possible to pre-train highly effective encoders using large volumes of unlabeled data, leading to the emerging encoder as a service (EaaS). A pre-trained encoder may be deemed confidential because its training requires lots of data and computation resources, and its public release may facilitate misuse of AI, e.g., for deepfakes. In this paper, we propose the first attack, called StolenEncoder, to steal pre-trained image encoders. We evaluate StolenEncoder on multiple target encoders pre-trained by ourselves and on three real-world target encoders, including the ImageNet encoder pre-trained by Google, the CLIP encoder pre-trained by OpenAI, and Clarifai's General Embedding encoder deployed as a paid EaaS. Our results show that our stolen encoders have similar functionality to the target encoders. In particular, downstream classifiers built upon a target encoder and upon a stolen one have similar accuracy. Moreover, stealing a target encoder using StolenEncoder requires much less data and computation than pre-training it from scratch. We also explore three defenses that perturb the vectors produced by the target encoders. Our results show that these defenses are not enough to mitigate StolenEncoder.
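At its core, such an attack can be framed as black-box distillation: query the target encoder through the EaaS API and pull a surrogate's embeddings toward the returned vectors. A minimal PyTorch sketch of one such training step follows; the cosine objective is an assumption, and the paper's actual loss and query strategy may differ.

```python
import torch
import torch.nn.functional as F

def distill_step(surrogate, target_encoder, images, optimizer):
    """One training step of a surrogate encoder mimicking a target
    encoder that is only reachable as a black-box embedding service."""
    with torch.no_grad():
        teacher = target_encoder(images)  # vectors returned by the service
    student = surrogate(images)
    # Pull the surrogate's embeddings toward the target's (cosine distance).
    loss = 1 - F.cosine_similarity(student, teacher, dim=-1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```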
Data poisoning attacks and backdoor attacks aim to corrupt a machine learning classifier by modifying, adding, and/or removing some carefully selected training examples, such that the corrupted classifier makes incorrect predictions as the attacker desires. The key idea of state-of-the-art certified defenses against data poisoning and backdoor attacks is to create a majority vote mechanism to predict the label of a testing example, where each voter is a base classifier trained on a subset of the training dataset. Classical simple learning algorithms such as k nearest neighbors (kNN) and radius nearest neighbors (rNN) have intrinsic majority vote mechanisms. In this work, we show that the intrinsic majority vote mechanisms in kNN and rNN already provide certified robustness guarantees against data poisoning and backdoor attacks. Moreover, our evaluation results on MNIST and CIFAR10 show that the intrinsic certified robustness guarantees of kNN and rNN outperform those of state-of-the-art certified defenses. Our results serve as standard baselines for future certified defenses against data poisoning and backdoor attacks.
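To make the intrinsic majority-vote mechanism concrete, the sketch below returns a kNN prediction together with its vote margin; intuitively, under modification attacks, flipping the prediction requires corrupting on the order of ceil(margin / 2) of the voting neighbors. This conveys the intuition only; the paper's actual certificates handle ties, additions, and removals more carefully.

```python
import numpy as np
from collections import Counter

def knn_predict_with_margin(train_X, train_y, x, k=5):
    """kNN prediction plus the vote margin underlying its intrinsic
    certified robustness: an attacker who modifies fewer than roughly
    margin / 2 of the voting examples cannot flip the majority vote."""
    dists = np.linalg.norm(train_X - x, axis=1)
    votes = Counter(train_y[np.argsort(dists)[:k]])
    (top_label, top_count), *rest = votes.most_common()
    runner_up = rest[0][1] if rest else 0
    margin = top_count - runner_up
    return top_label, margin
```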
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
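As one concrete reading of the global photometric alignment module, a Reinhard-style per-channel statistics transfer already aligns image-level properties across domains; the sketch below is that simple baseline, not necessarily the module proposed in the paper.

```python
import numpy as np

def photometric_align(source, target):
    """Globally align a source-domain image to target-domain statistics
    by matching per-channel mean and standard deviation (a common
    Reinhard-style transfer; the paper's module may be more involved)."""
    src = source.astype(np.float64)
    tgt = target.astype(np.float64)
    for c in range(3):
        mu_s, std_s = src[..., c].mean(), src[..., c].std() + 1e-8
        mu_t, std_t = tgt[..., c].mean(), tgt[..., c].std()
        src[..., c] = (src[..., c] - mu_s) / std_s * std_t + mu_t
    return np.clip(src, 0, 255).astype(np.uint8)
```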
Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying various types of PEAs are critical in improving visual quality. In this paper, we investigate the influence of four spatial PEAs (i.e. blurring, blocking, bleeding, and ringing) and two temporal PEAs (i.e. flickering and floating) on video quality. For spatial artifacts, we propose a visual saliency model with a low computational cost and higher consistency with human visual perception. In terms of temporal artifacts, self-attention based TimeSFormer is improved to detect temporal artifacts. Based on the six types of PEAs, a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM) is proposed. Experimental results demonstrate that the proposed method outperforms state-of-the-art metrics. We believe that SSTAM will be beneficial for optimizing video coding techniques.
Image virtual try-on aims at replacing the cloth on a personal image with a garment image (in-shop clothes), which has attracted increasing attention from the multimedia and computer vision communities. Prior methods successfully preserve the character of clothing images; however, occlusion remains a pernicious effect for realistic virtual try-on. In this work, we first present a comprehensive analysis of the occlusions and categorize them into two aspects: i) Inherent-Occlusion: the ghost of the former cloth still exists in the try-on image; ii) Acquired-Occlusion: the target cloth warps to the unreasonable body part. Based on the in-depth analysis, we find that the occlusions can be simulated by a novel semantically-guided mixup module, which can generate semantic-specific occluded images that work together with the try-on images to facilitate training a de-occlusion try-on (DOC-VTON) framework. Specifically, DOC-VTON first conducts a sharpened semantic parsing on the try-on person. Aided by semantics guidance and pose prior, various complexities of texture are selectively blended with human parts in a copy-and-paste manner. Then, the Generative Module (GM) is utilized to take charge of synthesizing the final try-on image and learning de-occlusion jointly. In comparison to the state-of-the-art methods, DOC-VTON achieves better perceptual quality by reducing occlusion effects.
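The semantically-guided mixup can be pictured as a masked copy-and-paste: blend a foreign texture onto a semantically selected body part of the try-on image. The sketch below is a deliberately simplified version of that idea; the actual module's semantic guidance and blending schedule are richer than shown.

```python
import numpy as np

def simulate_occlusion(person_img, texture_img, part_mask, alpha=1.0):
    """Simulate an acquired occlusion by blending a foreign texture onto
    a semantically selected human part in a copy-and-paste manner.

    person_img:  (H, W, 3) try-on image
    texture_img: (H, W, 3) texture source (e.g., a warped garment)
    part_mask:   (H, W)    binary mask of the chosen body part
    """
    mask = part_mask[..., None].astype(np.float32) * alpha
    blended = person_img * (1 - mask) + texture_img * mask
    return blended.astype(person_img.dtype)
```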
Panoptic Part Segmentation (PPS) unifies panoptic segmentation and part segmentation into one task. Previous works utilize separated approaches to handle thing, stuff, and part predictions without shared computation and task association. We aim to unify these tasks at the architectural level, designing the first end-to-end unified framework named Panoptic-PartFormer. Moreover, we find that the previous metric PartPQ is biased toward PQ. To handle both issues, we make the following contributions: Firstly, we design a meta-architecture that decouples part features and things/stuff features, respectively. We model things, stuff, and parts as object queries and directly learn to optimize all three forms of prediction as a unified mask prediction and classification problem. We term our model Panoptic-PartFormer. Secondly, we propose a new metric, Part-Whole Quality (PWQ), to better measure this task from both pixel-region and part-whole perspectives. It can also decouple the errors for part segmentation and panoptic segmentation. Thirdly, inspired by Mask2Former and based on our meta-architecture, we propose Panoptic-PartFormer++ and design a new part-whole cross attention scheme to further boost part segmentation quality, using masked cross attention for the part-whole interaction. Finally, extensive ablation studies and analysis demonstrate the effectiveness of both Panoptic-PartFormer and Panoptic-PartFormer++. Compared with the previous Panoptic-PartFormer, our Panoptic-PartFormer++ achieves 2% PartPQ and 3% PWQ improvements on the Cityscapes PPS dataset and 5% PartPQ on the Pascal Context PPS dataset. On both datasets, Panoptic-PartFormer++ achieves new state-of-the-art results with a significant cost drop of 70% in GFlops and 50% in parameters. Our models can serve as a strong baseline and aid future research in PPS. Code will be available.
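The part-whole cross attention is described as a masked cross attention in the spirit of Mask2Former. A minimal sketch of that mechanism is given below, where each part query attends only to pixel features inside its currently predicted mask; the empty-mask fallback mirrors Mask2Former's trick, and all shapes are illustrative.

```python
import torch

def masked_cross_attention(queries, keys, values, attn_mask):
    """Mask2Former-style masked cross attention: each (part) query attends
    only to pixel features inside its current predicted (whole) mask.

    queries:   (Q, D) part queries
    keys:      (P, D) pixel features
    values:    (P, D) pixel features
    attn_mask: (Q, P) bool, True where attention is allowed
    """
    logits = queries @ keys.t() / queries.size(-1) ** 0.5
    # Fall back to full attention for queries whose mask is empty,
    # mirroring the trick used in Mask2Former.
    empty = ~attn_mask.any(dim=-1, keepdim=True)  # (Q, 1)
    logits = logits.masked_fill((~attn_mask) & (~empty), float('-inf'))
    attn = torch.softmax(logits, dim=-1)
    return attn @ values
```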
In recent years, the Transformer architecture has shown its superiority in the video-based person re-identification task. Inspired by video representation learning, these methods mainly focus on designing modules to extract informative spatial and temporal features. However, they are still limited in extracting local attributes and global identity information, which are critical for the person re-identification task. In this paper, we propose a novel Multi-Stage Spatial-Temporal Aggregation Transformer (MSTAT) with two newly designed proxy embedding modules to address the above issue. Specifically, MSTAT consists of three stages to encode the attribute-associated, the identity-associated, and the attribute-identity-associated information from the video clips, respectively, achieving a holistic perception of the input person. We combine the outputs of all the stages for the final identification. In practice, to save computational cost, the Spatial-Temporal Aggregation (STA) modules are first adopted in each stage to conduct the self-attention operations along the spatial and temporal dimensions separately. We further introduce the Attribute-Aware and Identity-Aware Proxy embedding modules (AAP and IAP) to extract informative and discriminative feature representations at different stages. All of them are realized by employing newly designed self-attention operations with specific meanings. Moreover, temporal patch shuffling is also introduced to further improve the robustness of the model. Extensive experimental results demonstrate the effectiveness of the proposed modules in extracting informative and discriminative information from the videos, and illustrate that MSTAT can achieve state-of-the-art accuracies on various standard benchmarks.
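The STA module's separate self-attention along spatial and temporal dimensions amounts to factorized attention over a (frames x patches) token grid. Below is a minimal PyTorch sketch of that factorization; the layer sizes, head count, and the absence of residual connections and normalization are simplifications, not the paper's exact design.

```python
import torch
import torch.nn as nn

class SpatialTemporalAggregation(nn.Module):
    """Factorized self-attention over a (B, T, N, D) token grid: attend
    within each frame (spatial), then across frames per location
    (temporal). Shapes and head count are illustrative."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                  # x: (B, T, N, D)
        B, T, N, D = x.shape
        s = x.reshape(B * T, N, D)         # spatial attention within frames
        s, _ = self.spatial(s, s, s)
        s = s.reshape(B, T, N, D).permute(0, 2, 1, 3).reshape(B * N, T, D)
        t, _ = self.temporal(s, s, s)      # temporal attention across frames
        return t.reshape(B, N, T, D).permute(0, 2, 1, 3)
```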
This paper illustrates the technologies of user next intent prediction with a concept knowledge graph. The system has been deployed on the Web at Alipay, serving more than 100 million daily active users. Specifically, we propose AlipayKG to explicitly characterize user intent, which is an offline concept knowledge graph in the Life-Service domain modeling the historical behaviors of users, the rich content interacted by users and the relations between them. We further introduce a Transformer-based model which integrates expert rules from the knowledge graph to infer the online user's next intent. Experimental results demonstrate that the proposed system can effectively enhance the performance of the downstream tasks while retaining explainability.